self-paced contrastive learning
Self-Paced Contrastive Learning for Semi-supervised Medical Image Segmentation with Meta-labels
The contrastive pre-training of a recognition model on a large dataset of unlabeled data often boosts the model's performance on downstream tasks like image classification. However, in domains such as medical imaging, collecting unlabeled data can be challenging and expensive. In this work, we consider the task of medical image segmentation and adapt contrastive learning with meta-label annotations to scenarios where no additional unlabeled data is available. Meta-labels, such as the location of a 2D slice in a 3D MRI scan, often come for free during the acquisition process. We use these meta-labels to pre-train the image encoder, as well as in a semi-supervised learning step that leverages a reduced set of annotated data. A self-paced learning strategy exploiting the weak annotations is proposed to further help the learning process and discriminate useful labels from noise. Results on five medical image segmentation datasets show that our approach: i) greatly boosts the performance of a model trained on a few scans, ii) outperforms previous contrastive and semi-supervised approaches, and iii) approaches the performance of a model trained on the full data.
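The core idea of the abstract, using freely available meta-labels (such as slice position) to define positive pairs for contrastive pre-training, can be illustrated with a supervised-contrastive-style loss where samples sharing a meta-label attract each other. This is a minimal hypothetical sketch, not the authors' exact formulation; the function name and binning of slice positions are assumptions.

```python
import numpy as np

def meta_label_contrastive_loss(embeddings, meta_labels, temperature=0.1):
    """Supervised-contrastive-style loss where positives are samples that
    share a meta-label (e.g. the same slice-position bin in a 3D scan).
    Hypothetical sketch of the idea, not the paper's exact objective."""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                   # pairwise similarity logits
    n = len(meta_labels)
    mask_self = np.eye(n, dtype=bool)
    sim = np.where(mask_self, -np.inf, sim)       # exclude self-comparisons
    # row-wise log-softmax over all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives: same meta-label, excluding self
    pos = (meta_labels[:, None] == meta_labels[None, :]) & ~mask_self
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) \
        / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Minimizing this loss pulls slices with the same meta-label together in embedding space, which is the pre-training signal the abstract describes as coming "for free" from the acquisition process.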
Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
Domain adaptive object re-ID aims to transfer the learned knowledge from a labeled source domain to an unlabeled target domain to tackle open-class re-identification problems. Although state-of-the-art pseudo-label-based methods have achieved great success, they do not make full use of all valuable information because of the domain gap and unsatisfactory clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level and un-clustered instance-level supervisory signals for learning feature representations. Different from the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, target-domain clusters, and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and learning targets, and is shown to be the key to our outstanding performance. Our method outperforms state-of-the-art approaches on multiple domain adaptation tasks of object re-ID and even boosts the performance on the source domain without any extra annotations. Our generalized version on unsupervised object re-ID surpasses state-of-the-art algorithms by a considerable 16.7% and 7.9% on the Market-1501 and MSMT17 benchmarks.
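The "hybrid memory" the abstract describes scores each query feature against three kinds of entries at once: source-domain class centroids, target-domain cluster centroids, and un-clustered instance features. A minimal sketch of that unified comparison, with hypothetical names and a simple softmax over the concatenated memory (the real framework's memory updates and loss details differ):

```python
import numpy as np

def unified_contrastive_probs(query, class_centroids, cluster_centroids,
                              instance_feats, temperature=0.05):
    """Compare one query feature against every hybrid-memory entry at once:
    source-domain class centroids, target-domain cluster centroids, and
    un-clustered target instance features. Hypothetical sketch only."""
    # stack all three kinds of supervisory signals into one memory bank
    memory = np.vstack([class_centroids, cluster_centroids, instance_feats])
    memory = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    logits = memory @ q / temperature
    # softmax over ALL entries: classes, clusters and instances compete jointly
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()
```

Training would then maximize the probability of the query's own class, cluster, or instance entry, which is what lets the framework "jointly distinguish" all three levels instead of treating each domain separately.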
Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID Supplementary Material
Dapeng Chen is the corresponding author. The initial learning rate is set to 0.00035 and is decreased to 1/10 of its previous value every 20 epochs, over 50 epochs in total. As shown in Table 7, a significant 4.8% mAP improvement can be observed when applying the self-paced learning strategy. Interestingly, the final performance is even better than that with DBSCAN. Experiments are conducted on the task of unsupervised person re-ID on Market-1501, and the chosen hyper-parameters are directly applied to all the other tasks.
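The learning-rate schedule stated above (start at 0.00035, multiply by 1/10 every 20 epochs, 50 epochs total) is a standard step decay. A small sketch, with an assumed helper name:

```python
def lr_at_epoch(epoch, base_lr=3.5e-4, step=20, gamma=0.1):
    """Step-decay schedule matching the stated setup: 3.5e-4 initially,
    scaled by 1/10 every 20 epochs (epochs 0-19, 20-39, 40-49)."""
    return base_lr * gamma ** (epoch // step)
```

So epochs 0-19 run at 3.5e-4, epochs 20-39 at 3.5e-5, and epochs 40-49 at 3.5e-6.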
Review for NeurIPS paper: Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
Weaknesses: - The main idea of this method is unified contrastive learning. However, the strategy of jointly learning the source and target domains is not new, although different methods implement it with different losses (e.g., in [57,58]). It is also natural that the performance on the source domain with joint learning of the source and target domains is higher than fine-tuning with target data only. Besides, the form of non-parametric contrastive learning is widely used in general unsupervised visual representation learning methods (such as MoCo and SimCLR) and is not new in this method. It may fit the current UDA benchmarks, but the generality of a method based on such an assumption is limited in real-world practical application scenarios where no prior knowledge is available on the target data. Existing methods that optimize the source and target domains separately thus show more of an advantage in this respect.
Review for NeurIPS paper: Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
Three of the four reviewers originally recommended marginal accept or accept (7, 6, 6), as they felt the paper provided a good empirical contribution to the field of adaptive re-identification and its results were strong. R9 was more negative and had concerns about the experiments. One reviewer pointed out that the DukeMTMC dataset used extensively in the paper was taken down 12 months ago and its use should be discontinued. Because of the ethical concerns around this, the paper underwent additional review by the ethics panel, which recommended that the dataset should NOT be used in an accepted NeurIPS paper. Some excerpts from the ethics reviewers are below: -- "... the dataset collection involved non-consensual video surveillance of students on Duke University campus. It is unlikely that all students even knew they were being recorded, and their relative lack of power with respect to the institution surveilling them also raises concerns about the ability to meaningfully object to the surveillance."